Non-Parametric k-Sample Tests for Comparing Forecasting Models

Author: Dmitriy A. Klyushin

POLIBITS, Vol. 62, pp. 33-41, 2020.

Abstract: Business intelligence is impossible without practical tools for assessing the quality of forecasts and comparing forecasting models. A naive approach that compares predicted values with observed ones ignores the probabilistic nature of errors: models with different apparent accuracy may nevertheless be statistically equivalent. Hence, before ranking models by accuracy, it is necessary to test the statistical hypothesis that the distributions of their errors are homogeneous. When several models are available, the problems of their pairwise and group comparison arise. This paper provides an overview of the non-parametric tests used in business analysis for pairwise and group comparisons and describes a new non-parametric statistical test that is highly reliable, sensitive, and specific. This test is based on assessing the deviation of the observed relative frequency of an event from its a priori known probability. The prior probability is given by Hill's assumption, and confidence intervals for the binomial success rate in the Bernoulli scheme are used to estimate its difference from the observed relative frequency. The paper presents the results of computer modeling and a comparison of the proposed test with the alternative Kruskal-Wallis and Friedman tests on artificial and real examples.
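The abstract only outlines the construction of the proposed test, so a minimal sketch may help fix the idea. The following Python code is an assumption-laden reading of the description: for two error samples x and y, Hill's assumption A(n) gives the prior probability p_ij = (j - i)/(n + 1) that a new observation falls between the order statistics x_(i) and x_(j), and the observed relative frequency of y-values in that interval is compared with p_ij through a binomial confidence interval. The Wilson interval and the function name proximity_measure are illustrative choices here, not necessarily those made in the paper.

import numpy as np
from scipy.stats import norm

def proximity_measure(x, y, conf=0.95):
    """Fraction of order-statistic intervals (x_(i), x_(j)) whose binomial
    confidence interval for the observed frequency of y-values covers the
    prior probability (j - i)/(n + 1) given by Hill's assumption A(n).
    Under homogeneity this fraction should be close to the confidence level.
    """
    x = np.sort(np.asarray(x, dtype=float))
    y = np.asarray(y, dtype=float)
    n, m = len(x), len(y)
    z = norm.ppf(0.5 + conf / 2.0)  # normal quantile, e.g. 1.96 for 95%
    hits = total = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            p = (j - i) / (n + 1.0)               # a priori probability (Hill's A(n))
            h = np.mean((y > x[i]) & (y < x[j]))  # observed relative frequency
            # Wilson confidence interval for the binomial success rate
            denom = 1.0 + z ** 2 / m
            center = (h + z ** 2 / (2 * m)) / denom
            half = z * np.sqrt(h * (1.0 - h) / m + z ** 2 / (4 * m ** 2)) / denom
            hits += center - half <= p <= center + half
            total += 1
    return hits / total

The Kruskal-Wallis and Friedman baselines mentioned in the abstract are available in SciPy, so a group comparison of several models' error samples could be run as follows (synthetic data shown purely for illustration):

from scipy.stats import kruskal, friedmanchisquare

rng = np.random.default_rng(0)
errors = [rng.normal(mu, 1.0, 100) for mu in (0.0, 0.0, 0.5)]  # errors of three models
print(kruskal(*errors))            # independent samples
print(friedmanchisquare(*errors))  # repeated measures over the same test set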

Keywords: Business process modeling, error analysis, modeling and prediction, non-parametric statistics

PDF: Non-Parametric k-Sample Tests for Comparing Forecasting Models

https://doi.org/10.17562/PB-62-4